Joint additive Kullback–Leibler residual minimization and regularization for linear inverse problems
Authors
Abstract
For the approximate solution of ill-posed inverse problems, the formulation of a regularization functional involves two separate decisions: the choice of the residual minimizer and the choice of the regularizer. In this paper, the Kullback–Leibler functional is used for both. The resulting regularization method can solve problems for which the operator and the observational data are positive along with the solution, as occurs in many inverse problem applications. Here, existence, uniqueness, convergence and stability of the regularized approximations are established under quite natural regularity conditions. Convergence rates are obtained by using an a priori parameter-choice strategy. Copyright © 2007 John Wiley & Sons, Ltd.
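The abstract's approach can be illustrated on a small synthetic problem: minimize the sum of a Kullback–Leibler residual term KL(g, Au) and a KL regularization term KL(u, p) against a positive prior p, subject to positivity of u. The sketch below is not the paper's algorithm; it is a minimal numerical illustration with hypothetical data (`A`, `g`, `p`, `alpha` are all assumptions) using generalized KL divergences and an off-the-shelf bound-constrained optimizer.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical positive test problem: positive operator, positive exact
# solution, and positive (mildly noisy) observational data.
m, n = 30, 20
A = rng.uniform(0.1, 1.0, size=(m, n))
u_true = rng.uniform(0.5, 2.0, size=n)
g = (A @ u_true) * rng.uniform(0.98, 1.02, size=m)

p = np.ones(n)    # positive prior guess for the regularizer (assumption)
alpha = 1e-2      # a priori regularization parameter (assumption)

def kl(a, b):
    """Generalized Kullback-Leibler divergence for positive vectors."""
    return np.sum(a * np.log(a / b) - a + b)

def objective(u):
    # Joint functional: KL residual plus alpha times KL regularizer.
    return kl(g, A @ u) + alpha * kl(u, p)

def gradient(u):
    Au = A @ u
    # d/du of kl(g, Au) is A^T (1 - g/Au); d/du of kl(u, p) is log(u/p).
    return A.T @ (1.0 - g / Au) + alpha * np.log(u / p)

# Positivity of the minimizer is enforced through simple lower bounds.
res = minimize(objective, x0=p.copy(), jac=gradient,
               method="L-BFGS-B", bounds=[(1e-8, None)] * n)
u_reg = res.x
print(objective(u_reg) < objective(p))  # minimizer improves on the prior
```

The L-BFGS-B solver is used here purely for convenience; any method that respects the positivity constraint (for example, multiplicative EM-type updates) could serve the same illustrative purpose.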
Similar resources
Regularization of multiplicative iterative algorithms with nonnegative constraint
This paper studies the regularization of constrained maximum-likelihood iterative algorithms applied to incompatible ill-posed linear inverse problems. Specifically, we introduce a novel stopping rule which defines a regularization algorithm for the Iterative Space Reconstruction Algorithm in the case of least-squares minimization. Further, we show that the same rule regularizes the Expectation M...
Comparison of Kullback-Leibler, Hellinger and LINEX with Quadratic Loss Function in Bayesian Dynamic Linear Models: Forecasting of Real Price of Oil
In this paper, we examine the application of the Kullback-Leibler, Hellinger and LINEX loss functions in a Dynamic Linear Model, using 106 years of real oil-price data from 1913 to 2018, concerning the asymmetry problem in filtering and forecasting. We use the DLM form of the basic Hotelling Model under the Quadratic loss function, Kullback-Leibler, Hellinger and LINEX, trying to address the ...
Kullback-Leibler Approximation for Probability Measures on Infinite Dimensional Spaces
In a variety of applications it is important to extract information from a probability measure μ on an infinite dimensional space. Examples include the Bayesian approach to inverse problems and (possibly conditioned) continuous time Markov processes. It may then be of interest to find a measure ν, from within a simple class of measures, which approximates μ. This problem is studied in the case ...
Ill-Posed and Linear Inverse Problems
In this paper, ill-posed linear inverse problems that arise in many applications are considered. The instability of a special kind of these problems, and its relation to the kernel, is described. To find a stable solution to these problems, some kind of regularization is needed, which is presented. The results have been applied to a singular equation.
Information Complexity-Based Regularization Parameter Selection for Solution of Ill-Conditioned Inverse Problems
We propose an information complexity-based regularization parameter selection method for solution of ill-conditioned inverse problems. The regularization parameter is selected to be the minimizer of the Kullback-Leibler (KL) distance between the unknown data-generating distribution and the fitted distribution. The KL distance is approximated by an information complexity (ICOMP) criterion develo...
Publication year: 2007